Presentation given by Thomas Serre of MIT to the Redwood Center for Theoretical Neuroscience at UC Berkeley on April 22, 2009.
Perception involves a complex interaction between feedforward, sensory-driven information and feedback from attentional, memory, and executive processes that modulate that feedforward processing. A mechanistic understanding of how feedforward and feedback signals are integrated is a necessary step toward elucidating key aspects of visual and cognitive function and dysfunction.
In this talk, I will describe a computational framework for the study of visual perception. I will present computational as well as experimental evidence suggesting that bottom-up and top-down processes make distinct and essential contributions to the recognition of complex visual scenes. A feedforward computational architecture may provide a satisfactory account of “immediate recognition,” corresponding to the first few hundred milliseconds of visual processing. However, such an architecture may be limited when recognizing complex, cluttered visual scenes. Attentional mechanisms and cortical feedback may be necessary to overcome these limitations. Finally, I will show that it is possible to reliably read the mind’s eye from fMRI signals and predict the category of objects being mentally imagined by human observers. This result is a case in point suggesting that cortical feedback may be highly specific.
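As a rough illustration of the kind of decoding analysis mentioned above (this is a toy sketch on simulated data, not the actual pipeline from the talk): category decoding from fMRI typically treats each trial as a vector of voxel responses and trains a simple classifier to tell the categories apart. The voxel count, category set, and noise model below are all invented for the example, and a nearest-centroid rule stands in for whatever classifier was actually used.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200

# Assume each category evokes a characteristic (here, random) voxel pattern.
patterns = {"face": rng.normal(size=n_voxels),
            "house": rng.normal(size=n_voxels)}

def simulate_trials(category, n_trials, noise=1.0):
    """Noisy single-trial voxel responses around the category's pattern."""
    return patterns[category] + noise * rng.normal(size=(n_trials, n_voxels))

# "Training": estimate one centroid pattern per category from sample trials.
centroids = {c: simulate_trials(c, 20).mean(axis=0) for c in patterns}

def decode(trial):
    """Classify a trial as the category with the nearest centroid."""
    dists = {c: np.linalg.norm(trial - m) for c, m in centroids.items()}
    return min(dists, key=dists.get)

# "Testing": decode fresh noisy trials and measure accuracy.
correct = 0
n_test = 10
for cat in patterns:
    for trial in simulate_trials(cat, n_test):
        correct += decode(trial) == cat
accuracy = correct / (len(patterns) * n_test)
```

With this much simulated signal the decoder is near-perfect; the interesting empirical point in the talk is that a classifier trained on perception-like patterns can generalize to imagery, which is what licenses the claim that feedback is content-specific.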